Thought experiment
A to-do list that gets it done for you
Wouldn’t it be great if your to-do list weren’t just a primitive data store, but actually helped you get things done? If it weren’t just an app, but connected to a personal assistant that knows a lot about you and the tasks at hand?
When I read that Wunderlist had been acquired by Microsoft, I wasn’t surprised at all. First of all: I’m very happy for the team at 6Wunderkinder. I think they were quite influential in Berlin’s early recognition as a startup city.
After acquiring products such as Acompli Email and Sunrise Calendar, Microsoft seems to be picking the well-designed, widely adopted cherries out of the world of cross-platform productivity apps. Having them under one roof makes a lot of sense for a company that makes an operating system.
When you look at these apps in isolation, they are actually quite simple and limited in how much they can help you be more productive. Each of them focuses on a very specific domain. Email lets you communicate with friends and colleagues, the calendar helps you plan your life, and a to-do list basically stores to-dos that you can tick off.
But because an app like Wunderlist doesn’t seem to know much about the context or the meaning of a task, it can only remind you of something you want to get done. It doesn’t do anything to actually get it done for you.
It’s time to talk…
Let’s take the following example to-dos from Wunderlist’s website:
- Book a flight to Paris
- Buy Interstellar DVD
- Finalize presentation
- Buy coffee beans
These prototypical action items are not only short and thus quick to capture; by putting the verb first, they are also easy to grasp at a glance.
Verbs help pull us into our Action Steps at first glance, efficiently indicating what type of action is required. — Scott Belsky
Still, lots of contextual questions may arise. When and why do you want to go to Paris? Which presentation? What kind of coffee do you like? Where is the line between spending too much time adding every detail to a to-do and keeping it so short that you can’t make much sense of it later?
Now let’s imagine that the to-do list can talk to other apps, such as your calendar, your email, or your reading list. Imagine that it is part of a personal assistant.
Imagine it’s Saturday evening: you’re relaxing on your couch and ask out loud, “What do I need to get done?”, only to hear your living-room speakers respond:
Assistant: “Let me check… You need to book a flight to Paris and buy two things.”
You: “Okay, let’s do the shopping first.”
Assistant: “First, you want to buy the Interstellar DVD. It’s $9.99 on Amazon but it’s also available on Netflix. What do you think?”
You: “Oh, awesome. I will watch it on Netflix then.”
Assistant: “Fine, I added it to Netflix. Next on my list is coffee. Last time we bought 250 g of filter roast from Coffee Circle. Did you like it?”
You: “No, not so much. Ask my girlfriend if she can buy coffee at our favorite café.”
Assistant: “Okay, I texted her.”
You: “Perfect. Let’s book the flight now. I want to arrive a day before the conference.”
Assistant: “So the conference in Paris runs from August 22 to 23. Do you want to fly after 9 AM, as usual?”
You: “Sure!”
Assistant: “I’ve found a flight with Swiss Airlines for $300, departing on Friday at 9:35 AM and returning on Sunday at 7:10 PM. It’s $45 more expensive than the cheapest option, but more convenient for you.”
You: “Okay, open it on my computer.”
Assistant: “As you wish. That’s it. There is only one more task, but it’s work-related.”
A truly personal assistant
You might think: This should already be possible with Siri, Google Now or Cortana, right? Thanks to advances in deep learning, these systems are very good at understanding our voice and responding to our inputs.
They are great toys for showing off advances in speech recognition technology, but speaking only makes sense in your private home, while driving a car, or when you are doing manual labor. Talking to a machine has little appeal in an open office, where not only productivity apps but also your colleagues are all around. So the ability to type and chat with your assistant should be a given.
In the best case, you could select a to-do such as “Finalize presentation” and the system would simply open the presentation for you, along with the folder that holds all the necessary documents, without you losing a thought about which app to launch or where that deeply nested folder lives.
In fact, your personal assistant needs to be accessible anywhere, on any device and operating system, by speaking or typing, whichever form of input is most appropriate for a given time and place.
In addition, a truly personal assistant should be able to do more than just capture the things you tell it right now. Instead of just adding, viewing, and checking off to-dos for you, a personal assistant should detect actionable information in your to-dos and act on it autonomously. By interpreting and aggregating information from different sources, it should proactively propose solutions and take work off your hands, all while you are busy with something else.
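To make “detecting actionable information” a bit more concrete, here is a minimal, purely illustrative sketch in Python: a hand-written, rule-based parser that turns a verb-first to-do into a structured action and cross-references it with calendar events. Everything here (the patterns, the Action fields, the calendar format) is a made-up assumption for the sake of the example; a real assistant would rely on trained language-understanding models and live service integrations rather than toy regular expressions.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    verb: str                       # e.g. "book", "buy"
    obj: str                        # e.g. "flight", "Interstellar DVD"
    destination: Optional[str] = None

# Toy verb-first patterns; a real assistant would use a trained NLU model.
PATTERNS = [
    (re.compile(r"^book a flight to (?P<dest>.+)$", re.I), "book", "flight"),
    (re.compile(r"^buy (?P<obj>.+)$", re.I), "buy", None),
]

def parse_todo(text: str) -> Optional[Action]:
    """Turn a short, verb-first to-do into a structured action."""
    for pattern, verb, obj in PATTERNS:
        match = pattern.match(text.strip())
        if match:
            groups = match.groupdict()
            return Action(verb=verb,
                          obj=obj or groups.get("obj", ""),
                          destination=groups.get("dest"))
    return None

def enrich_with_calendar(action: Action, calendar_events: list) -> dict:
    """Attach related calendar context (hypothetical format) to an action."""
    context = {"action": action}
    if action.verb == "book" and action.obj == "flight" and action.destination:
        for event in calendar_events:
            if action.destination.lower() in event["location"].lower():
                context["related_event"] = event
    return context

if __name__ == "__main__":
    events = [{"title": "Conference", "location": "Paris, France",
               "start": "2015-08-22", "end": "2015-08-23"}]
    action = parse_todo("Book a flight to Paris")
    print(enrich_with_calendar(action, events))
```

Note how the verb-first phrasing from the examples above is what makes even this naive parsing feasible; the richer context, such as the conference dates, has to come from other sources like the calendar.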
Conclusion
I really believe in the power that natural language can bring to the field of interaction design. I also think that programming and design tools could benefit from it and become less cryptic and more intuitive.
Smartwatches and their successors might also play a role here. The availability of a microphone on your wrist already makes it much more convenient to dictate everyday commands.
But the more important personal information becomes, the more important it is that we find ways to keep that information private. We shouldn’t rely on speech recognition servers somewhere in the world, and we shouldn’t feed web services that aggregate all of our personal data. Living a simpler life in collaboration with technology is a daunting prospect. But to be able to rely on it, it might need to take some physical form. Like a diary that we can own, lock, hide, and destroy. We should be able to take control.
Before WWDC 2015 starts, I want to remind you that the whole idea of a personal assistant is nothing new. See how Apple, under its then-CEO John Sculley, imagined the future of computing almost 30 years ago. I’m very excited to see what comes next.
Inspiration & further reading
- Google Search Will Be Your Next Brain
- Futures of Text
- Baidu claims deep learning breakthrough with Deep Speech
- Google Now Expanding To Third Party Applications
- Ubiquity: Command the Web with Language
- SoundHound: Speech to Meaning
- Facebook Acquires Wit.ai To Help Its Developers With Speech Recognition And Voice Interfaces
- Imagining MessageKit: Apple’s path to turning iMessage into a platform
- The movie “Her”